LeakyRelu #13488 #13492
Conversation
Hi @sherry30, I am trying to implement the LeakyRelu function for the tensorflow raw_ops.
Hey @sherry30, while running the tests locally, I get this error.
Hi,
Your PR looks good, but a few changes are required.
First off, there are lint errors introduced by your changes; can you fix those?
Other than that, I've requested changes on specific lines.
The error you're facing seems to be because you're testing on numeric data types, while this tf function only expects floats. So you can change this where you're generating data in the test function.
Hope this review was helpful, let me know if you need help with anything else 😃
Thanks
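In the test's data-generation decorator, that change would look roughly like this (a fragment only; the helper names are taken from the test snippets quoted later in this thread):

```python
dtype_and_x=helpers.dtype_and_values(
    # leaky_relu only supports float dtypes, so don't draw ints/uints
    available_dtypes=helpers.get_dtypes("float"),  # was "numeric"
),
```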
```diff
@@ -487,3 +487,7 @@ def relu6(features, name=None):
 @to_ivy_arrays_and_back
 def softmax(logits, axis=None, name=None):
     return ivy.softmax(logits, axis=axis)
+
+
+def leaky_relu(features, name=None):
```
`tf.nn.leaky_relu` also accepts `alpha` as an argument; you have to pass that here as well.
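A minimal sketch of the suggested signature (the import path and the `0.2` default are assumptions, based on ivy's frontend layout and `tf.nn.leaky_relu`'s documented default; the keyword forwarding anticipates a later comment in this review):

```python
import ivy
from ivy.functional.frontends.tensorflow.func_wrapper import to_ivy_arrays_and_back


@to_ivy_arrays_and_back
def leaky_relu(features, alpha=0.2, name=None):
    # forward alpha by keyword, since ivy.leaky_relu does not take it positionally
    return ivy.leaky_relu(features, alpha=alpha)
```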
```python
LeakyRelu = to_ivy_arrays_and_back(
    map_raw_ops_alias(
        tf_frontend.nn.leaky_relu,
        kwargs_to_update={"x": "features"},
```
No need for this line, since the raw_ops version of `leaky_relu` has the same argument name.
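So the alias would reduce to (names as in the snippet above):

```python
LeakyRelu = to_ivy_arrays_and_back(
    map_raw_ops_alias(
        tf_frontend.nn.leaky_relu,
    )
)
```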
I will just remove this line then, right?
```diff
@@ -3598,4 +3625,4 @@ def test_tensorflow_BatchMatMulV3(
     Tout=Tout,
     adj_x=adj_x,
     adj_y=adj_y,
-    )
+    )
```
accidentally changed... ?
yeah
```python
    on_device,
):
    dtype, x = dtype_and_x
    alpha = 0.2  # set the alpha value for LeakyReLU
```
You shouldn't hard-code this value of alpha for the test; you should generate it with hypothesis in the decorator above.
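Concretely, something like this in the decorator (a sketch; the `helpers.floats` usage matches what eventually lands later in this PR, the import paths are assumed from ivy's test-suite layout, and the extra parameters injected by `handle_frontend_test` are elided):

```python
from hypothesis import strategies as st

import ivy_tests.test_ivy.helpers as helpers
from ivy_tests.test_ivy.helpers import handle_frontend_test


@handle_frontend_test(
    fn_tree="tensorflow.raw_ops.LeakyRelu",
    dtype_and_x=helpers.dtype_and_values(
        available_dtypes=helpers.get_dtypes("float"),
    ),
    test_with_out=st.just(False),
    # draw alpha with hypothesis instead of hard-coding 0.2
    alpha=helpers.floats(min_value=0, max_value=1),
)
def test_tensorflow_LeakyRelu(dtype_and_x, alpha, on_device):
    dtype, x = dtype_and_x
    # ...the rest of the test then uses the drawn `alpha`
```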
Hey @sherry30, I am getting various errors while trying to generate data via hypothesis for the alpha variable. Could you look into it and give me a rough outline?
```python
@handle_frontend_test(
    fn_tree="tensorflow.raw_ops.LeakyRelu",
    dtype_and_x=helpers.dtype_and_values(
        available_dtypes=helpers.get_dtypes("numeric"),
```
Change `"numeric"` to `"float"`, since `tf.nn.leaky_relu` only expects float data types.
Hi,
It seems this function hasn't been assigned to you in the ToDo list here. You can follow this doc to get it assigned.
Other than that, can you remove the other extra changes I pointed out on specific lines?
Also, can you add a test for `leaky_relu` in the `nn` module, since you added that function as well? It will be pretty similar to the one you've added for the raw_ops version of `leaky_relu`.
Let me know if you have any other questions 😄
Thanks
```diff
@@ -3598,6 +3633,7 @@ def test_tensorflow_BatchMatMulV3(
     Tout=Tout,
     adj_x=adj_x,
     adj_y=adj_y,
+
```
remove this
```diff
@@ -3622,3 +3658,4 @@ def test_tensorflow_Size(  # NOQA
     input=x[0],
     out_type=output_dtype,
 )
+
```
remove this
Hey @sherry30, can you tell me about the data generation for the alpha variable?
@sherry30 I made the comment for the issue assignment.
@sherry30 I have made some changes and tried to generate data for alpha, but it shows an error. Should I make a composite strategy for it, or would that be unnecessary?
@sherry30 While testing the code, it showed this error.
Hi Kaushal,
A composite strategy would be unnecessary, since this can be done simply with `helpers.floats`; let me know if that gives you any errors.
The second error you're getting is because `ivy.leaky_relu` does not accept `alpha` as a positional argument. To fix this, I've requested a change on the specific line.
Thanks
```diff
@@ -489,7 +489,11 @@ def softmax(logits, axis=None, name=None):
     return ivy.softmax(logits, axis=axis)
+
+
+def leaky_relu(features, alpha, name=None):
+    return ivy.leaky_relu(features, alpha)
```
Should be `alpha=alpha` on this line.
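That is, the line quoted above becomes:

```python
    return ivy.leaky_relu(features, alpha=alpha)
```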
Hi @sherry30, after making the changes it passed all three tests (numpy, jax, tensorflow), but for pytorch it failed and showed this error.
Hi,
Just a few more changes requested. Can you also check out the lint errors? There are some new ones introduced by your changes. Once this is done, I'll merge it 👍
Thanks
```diff
@@ -15,6 +15,36 @@
 )
+
+
+@handle_frontend_test(
+    fn_tree="tensorflow.raw_ops.LeakyRelu",
```
This test function is supposed to test `leaky_relu` in the `nn` module; change it to that function.
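A sketch of what that change might look like (the `fn_tree` string for the nn module and the exact parameter plumbing are assumptions, by analogy with other ivy frontend tests; imports as in the raw_ops sketch earlier in this thread):

```python
@handle_frontend_test(
    fn_tree="tensorflow.nn.leaky_relu",  # assumed path, mirroring the raw_ops fn_tree
    dtype_and_x=helpers.dtype_and_values(
        available_dtypes=helpers.get_dtypes("float"),
    ),
    test_with_out=st.just(False),
    alpha=helpers.floats(min_value=0, max_value=1),
)
def test_tensorflow_leaky_relu(
    *,
    dtype_and_x,
    alpha,
    frontend,
    test_flags,
    fn_tree,
    on_device,
):
    dtype, x = dtype_and_x
    # delegate to the shared frontend-test harness, passing alpha by keyword
    helpers.test_frontend_function(
        input_dtypes=dtype,
        frontend=frontend,
        test_flags=test_flags,
        fn_tree=fn_tree,
        on_device=on_device,
        features=x[0],
        alpha=alpha,
    )
```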
done
```python
    test_with_out=st.just(False),
    alpha=helpers.floats(min_value=0, max_value=1),
)
def test_tensorflow_LeakyReLU(  # NOQA
```
change the name to test_tensorflow_leaky_relu
done
Hey @sherry30, I formatted the code; the lint doesn't show any errors when running locally, but here it shows an error.
You can check the details of the lint error here and then open up…
I did that.
Hey @sherry30, lint errors, from…
Alright lgtm 👍
Thanks for the contribution 💪 and sorry for such a long process 😅
No worries, thanks @sherry30 for your help.
Co-authored-by: sherry30 <[email protected]>